In mathematics, in particular linear algebra, the matrix determinant lemma[1][2] computes the determinant of the sum of an invertible matrix $\mathbf{A}$ and the dyadic product, $\mathbf{u}\mathbf{v}^\mathsf{T}$, of a column vector $\mathbf{u}$ and a row vector $\mathbf{v}^\mathsf{T}$.
Suppose $\mathbf{A}$ is an invertible square matrix and $\mathbf{u}$, $\mathbf{v}$ are column vectors. Then the matrix determinant lemma states that

$$\det\left(\mathbf{A} + \mathbf{u}\mathbf{v}^\mathsf{T}\right) = \left(1 + \mathbf{v}^\mathsf{T}\mathbf{A}^{-1}\mathbf{u}\right)\det\left(\mathbf{A}\right).$$
Here, $\mathbf{u}\mathbf{v}^\mathsf{T}$ is the dyadic product of two vectors $\mathbf{u}$ and $\mathbf{v}$.
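As a concrete check (the specific numbers are chosen here purely for illustration), take

$$\mathbf{A} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \qquad \mathbf{u} = \mathbf{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

Then $\det\left(\mathbf{A} + \mathbf{u}\mathbf{v}^\mathsf{T}\right) = \det\begin{pmatrix} 3 & 1 \\ 1 & 4 \end{pmatrix} = 11$, and the right hand side gives $\left(1 + \tfrac{1}{2} + \tfrac{1}{3}\right)\cdot 6 = \tfrac{11}{6}\cdot 6 = 11$ as well.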
First, the proof of the special case $\mathbf{A} = \mathbf{I}$ follows from the equality:[3]

$$\begin{pmatrix} \mathbf{I} & \mathbf{0} \\ \mathbf{v}^\mathsf{T} & 1 \end{pmatrix} \begin{pmatrix} \mathbf{I} + \mathbf{u}\mathbf{v}^\mathsf{T} & \mathbf{u} \\ \mathbf{0} & 1 \end{pmatrix} \begin{pmatrix} \mathbf{I} & \mathbf{0} \\ -\mathbf{v}^\mathsf{T} & 1 \end{pmatrix} = \begin{pmatrix} \mathbf{I} & \mathbf{u} \\ \mathbf{0} & 1 + \mathbf{v}^\mathsf{T}\mathbf{u} \end{pmatrix}.$$
The determinant of the left hand side is the product of the determinants of the three matrices. Since the first and third matrix are triangular matrices with unit diagonal, their determinants are just 1. The determinant of the middle matrix is our desired value. The determinant of the right hand side is simply $\left(1 + \mathbf{v}^\mathsf{T}\mathbf{u}\right)$. So we have the result:

$$\det\left(\mathbf{I} + \mathbf{u}\mathbf{v}^\mathsf{T}\right) = 1 + \mathbf{v}^\mathsf{T}\mathbf{u}.$$
Then the general case can be found as:

$$\begin{aligned}\det\left(\mathbf{A} + \mathbf{u}\mathbf{v}^\mathsf{T}\right) &= \det\left(\mathbf{A}\right)\det\left(\mathbf{I} + \left(\mathbf{A}^{-1}\mathbf{u}\right)\mathbf{v}^\mathsf{T}\right)\\ &= \det\left(\mathbf{A}\right)\left(1 + \mathbf{v}^\mathsf{T}\mathbf{A}^{-1}\mathbf{u}\right).\end{aligned}$$
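Both the block factorization and the resulting formula are easy to verify numerically. A minimal sketch in Python with NumPy (the dimensions, seed, and diagonal shift are illustrative choices, not from the source):

```python
# Numerical check of the proof identity and of the general lemma.
import numpy as np

rng = np.random.default_rng(0)
n = 4
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
I = np.eye(n)

# Block factorization used in the special-case proof (A = I):
left  = np.block([[I, np.zeros((n, 1))], [ v.T, np.ones((1, 1))]])
mid   = np.block([[I + u @ v.T, u],      [np.zeros((1, n)), np.ones((1, 1))]])
right = np.block([[I, np.zeros((n, 1))], [-v.T, np.ones((1, 1))]])
rhs   = np.block([[I, u],                [np.zeros((1, n)), 1 + v.T @ u]])
assert np.allclose(left @ mid @ right, rhs)

# General case: det(A + u v^T) = (1 + v^T A^{-1} u) det(A).
A = rng.standard_normal((n, n)) + n * I   # diagonal shift keeps A invertible here
lhs = np.linalg.det(A + u @ v.T)
rhs_val = (1 + v.T @ np.linalg.inv(A) @ u).item() * np.linalg.det(A)
assert np.isclose(lhs, rhs_val)
```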
If the determinant and inverse of $\mathbf{A}$ are already known, the formula provides a numerically cheap way to compute the determinant of $\mathbf{A}$ corrected by the matrix $\mathbf{u}\mathbf{v}^\mathsf{T}$. The computation is relatively cheap because the determinant of $\mathbf{A} + \mathbf{u}\mathbf{v}^\mathsf{T}$ does not have to be computed from scratch (which in general is expensive). Using unit vectors for $\mathbf{u}$ and/or $\mathbf{v}$, individual columns, rows or elements[4] of $\mathbf{A}$ may be manipulated and a correspondingly updated determinant computed relatively cheaply in this way.
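For example, adding $\delta$ to the single entry $a_{ij}$ corresponds to $\mathbf{u} = \delta\,\mathbf{e}_i$ and $\mathbf{v} = \mathbf{e}_j$, so the updated determinant is $\left(1 + \delta\left(\mathbf{A}^{-1}\right)_{ji}\right)\det\left(\mathbf{A}\right)$. A sketch of this, assuming $\det(\mathbf{A})$ and $\mathbf{A}^{-1}$ are already available (the function name is hypothetical):

```python
import numpy as np

def det_after_entry_update(det_A, A_inv, i, j, delta):
    """Determinant of A after replacing A[i, j] by A[i, j] + delta.

    With u = delta * e_i and v = e_j the lemma gives
    det(A + delta e_i e_j^T) = (1 + delta * (A^{-1})[j, i]) * det(A),
    a single lookup once A^{-1} is known, versus an O(n^3) recomputation.
    """
    return (1.0 + delta * A_inv[j, i]) * det_A

# Illustrative usage:
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
det_A, A_inv = np.linalg.det(A), np.linalg.inv(A)
A_new = A.copy()
A_new[2, 3] += 0.5
assert np.isclose(det_after_entry_update(det_A, A_inv, 2, 3, 0.5),
                  np.linalg.det(A_new))
```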
When the matrix determinant lemma is used in conjunction with the Sherman–Morrison formula, both the inverse and determinant may be conveniently updated together.
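A sketch of such a combined update, assuming the denominator $1 + \mathbf{v}^\mathsf{T}\mathbf{A}^{-1}\mathbf{u}$ is nonzero (the function name is illustrative):

```python
import numpy as np

def rank_one_update(A_inv, det_A, u, v):
    """Inverse and determinant of A + u v^T from A^{-1} and det(A), in O(n^2).

    Sherman-Morrison:  (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    Determinant lemma: det(A + u v^T)   = (1 + v^T A^{-1} u) det(A)
    """
    Au = A_inv @ u          # A^{-1} u
    vA = v @ A_inv          # v^T A^{-1}
    denom = 1.0 + vA @ u    # 1 + v^T A^{-1} u; must be nonzero for invertibility
    return A_inv - np.outer(Au, vA) / denom, denom * det_A

# Illustrative usage with 1-D vectors u and v:
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
u, v = rng.standard_normal(4), rng.standard_normal(4)
new_inv, new_det = rank_one_update(np.linalg.inv(A), np.linalg.det(A), u, v)
assert np.allclose(new_inv, np.linalg.inv(A + np.outer(u, v)))
assert np.isclose(new_det, np.linalg.det(A + np.outer(u, v)))
```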
Suppose $\mathbf{A}$ is an invertible $n$-by-$n$ matrix and $\mathbf{U}$, $\mathbf{V}$ are $n$-by-$m$ matrices. Then

$$\det\left(\mathbf{A} + \mathbf{U}\mathbf{V}^\mathsf{T}\right) = \det\left(\mathbf{I_m} + \mathbf{V}^\mathsf{T}\mathbf{A}^{-1}\mathbf{U}\right)\det\left(\mathbf{A}\right).$$
In the special case $\mathbf{A} = \mathbf{I_n}$ this is Sylvester's theorem for determinants:

$$\det\left(\mathbf{I_n} + \mathbf{U}\mathbf{V}^\mathsf{T}\right) = \det\left(\mathbf{I_m} + \mathbf{V}^\mathsf{T}\mathbf{U}\right).$$
Given additionally an invertible $m$-by-$m$ matrix $\mathbf{W}$, the relationship can also be expressed as

$$\det\left(\mathbf{A} + \mathbf{U}\mathbf{W}\mathbf{V}^\mathsf{T}\right) = \det\left(\mathbf{W}^{-1} + \mathbf{V}^\mathsf{T}\mathbf{A}^{-1}\mathbf{U}\right)\det\left(\mathbf{W}\right)\det\left(\mathbf{A}\right).$$
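Both generalized identities can likewise be checked numerically. A minimal sketch (the dimensions and the diagonal shifts that keep $\mathbf{A}$ and $\mathbf{W}$ invertible are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shift keeps A invertible here
U = rng.standard_normal((n, m))
V = rng.standard_normal((n, m))
W = rng.standard_normal((m, m)) + m * np.eye(m)  # likewise for W
A_inv = np.linalg.inv(A)

# det(A + U V^T) = det(I_m + V^T A^{-1} U) det(A)
assert np.isclose(np.linalg.det(A + U @ V.T),
                  np.linalg.det(np.eye(m) + V.T @ A_inv @ U) * np.linalg.det(A))

# det(A + U W V^T) = det(W^{-1} + V^T A^{-1} U) det(W) det(A)
assert np.isclose(np.linalg.det(A + U @ W @ V.T),
                  np.linalg.det(np.linalg.inv(W) + V.T @ A_inv @ U)
                  * np.linalg.det(W) * np.linalg.det(A))
```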